    Runtime Quantitative Verification of Self-Adaptive Systems

    Software systems used in mission- and business-critical applications in domains including defence, healthcare, and finance must comply with strict dependability, performance, and other Quality-of-Service (QoS) requirements. Self-adaptive systems achieve this compliance under changing environmental conditions, evolving requirements and system failures by using closed-loop control to modify their behaviour and structure in response to these events. Runtime quantitative verification (RQV) is a mathematically based approach that implements the closed-loop control of self-adaptive systems. Using runtime observations of a system and its environment, RQV updates stochastic models whose formal analysis underpins the adaptation decisions made within the control loop. The approach can identify and, under certain conditions, predict violations of QoS requirements, and can drive self-adaptation in ways guaranteed to restore or maintain compliance with these requirements. Despite its merits, RQV has significant computation and memory overheads, which restrict its applicability to small systems and to adaptations affecting only the configuration parameters of the system. In this thesis, we introduce RQV variants that improve the efficiency and scalability of the approach and extend its applicability to larger and more complex self-adaptive software systems, and to adaptations that modify the structure of a system. First, we integrate RQV with established efficiency improvement techniques from other software engineering areas. We use caching of recent analysis results, limited lookahead to precompute suitable adaptations for potential future changes, and nearly-optimal reconfiguration to eliminate the need for an exhaustive analysis of the entire reconfiguration space. Second, we introduce an RQV variant that incorporates evolutionary algorithms into the RQV process, facilitating the efficient search through large reconfiguration spaces and enabling adaptations that include structural changes. Third, we propose an RQV-driven approach that decentralises the control loops in distributed self-adaptive systems. Finally, we devise an RQV-based methodology for the engineering of trustworthy self-adaptive systems. We evaluate the proposed RQV variants using prototype self-adaptive systems from several application domains, including an embedded system for unmanned underwater vehicles and a foreign exchange service-based system. Our results, subject to the adaptation scenarios used in the evaluation, demonstrate the effectiveness and generality of the new RQV variants.
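
    The caching idea can be illustrated with a minimal sketch. This is not the thesis's implementation: the quantisation step, the toy utility computed by verify_configuration and all parameter names below are assumptions introduced purely for illustration; in the actual approach the verification call would invoke a probabilistic model checker on an updated stochastic model.

        from functools import lru_cache

        def quantise(value, step=0.05):
            # Round observed environment parameters so that near-identical observations
            # map to the same cache key.
            return round(value / step) * step

        @lru_cache(maxsize=1024)
        def verify_configuration(config, failure_rate, arrival_rate):
            # Stand-in for an expensive runtime quantitative verification of one configuration.
            reliability = 1.0 - failure_rate ** config    # toy model: config = number of replicas
            cost = config * arrival_rate
            return reliability - 0.01 * cost              # single utility value for ranking

        def choose_configuration(configs, observed_failure_rate, observed_arrival_rate):
            # Repeated calls with similar observations hit the cache instead of re-verifying.
            fr = quantise(observed_failure_rate)
            ar = quantise(observed_arrival_rate)
            return max(configs, key=lambda c: verify_configuration(c, fr, ar))

    With this caching in place, two consecutive observations such as (0.021, 3.4) and (0.019, 3.4) map to the same quantised key, so the second adaptation decision reuses the cached verification results instead of triggering a fresh analysis.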

    Synthesis of Probabilistic Models for Quality-of-Service Software Engineering

    An increasingly used method for the engineering of software systems with strict quality-of-service (QoS) requirements involves the synthesis and verification of probabilistic models for many alternative architectures and instantiations of system parameters. Using manual trial-and-error or simple heuristics for this task often produces suboptimal models, while the exhaustive synthesis of all possible models is typically intractable. The EvoChecker search-based software engineering approach presented in our paper addresses these limitations by employing evolutionary algorithms to automate the model synthesis process and to significantly improve its outcome. EvoChecker can be used to synthesise the Pareto-optimal set of probabilistic models associated with the QoS requirements of a system under design, and to support the selection of a suitable system architecture and configuration. EvoChecker can also be used at runtime, to drive the efficient reconfiguration of a self-adaptive software system. We evaluate EvoChecker on several variants of three systems from different application domains, and show its effectiveness and applicability.
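
    To make the notion of a Pareto-optimal set concrete, the sketch below filters a set of already-verified candidate models down to the non-dominated ones. The candidate names, the choice of two objectives (reliability maximised, cost minimised) and their values are illustrative assumptions rather than results from the paper.

        def dominates(a, b):
            # a dominates b if it is no worse in every objective and strictly better in at least one.
            return (a["reliability"] >= b["reliability"] and a["cost"] <= b["cost"]
                    and (a["reliability"] > b["reliability"] or a["cost"] < b["cost"]))

        def pareto_front(candidates):
            # Keep only the candidates that no other candidate dominates.
            return [c for c in candidates
                    if not any(dominates(other, c) for other in candidates if other is not c)]

        # Hypothetical candidates, each annotated with its verified QoS attribute values.
        models = [
            {"name": "arch-A", "reliability": 0.995, "cost": 12.0},
            {"name": "arch-B", "reliability": 0.990, "cost": 8.5},
            {"name": "arch-C", "reliability": 0.985, "cost": 9.0},   # dominated by arch-B
        ]
        print([m["name"] for m in pareto_front(models)])             # ['arch-A', 'arch-B']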

    Importance-Driven Deep Learning System Testing

    Deep Learning (DL) systems are key enablers for engineering intelligent applications due to their ability to solve complex tasks such as image recognition and machine translation. Nevertheless, using DL systems in safety- and security-critical applications requires providing testing evidence of their dependable operation. Recent research in this direction focuses on adapting testing criteria from traditional software engineering as a means of increasing confidence in the correct behaviour of these systems. However, such criteria are inadequate for capturing the intrinsic properties exhibited by these systems. We bridge this gap by introducing DeepImportance, a systematic testing methodology accompanied by an Importance-Driven (IDC) test adequacy criterion for DL systems. Applying IDC makes it possible to establish a layer-wise functional understanding of the importance of DL system components and to use this information to guide the generation of semantically diverse test sets. Our empirical evaluation on several DL systems, across multiple DL datasets and with state-of-the-art adversarial generation techniques, demonstrates the usefulness and effectiveness of DeepImportance and its ability to guide the engineering of more robust DL systems.
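
    A rough sketch of the underlying idea follows. It assumes that per-input activations and gradients for one layer are already available as NumPy arrays of shape [n_inputs, n_neurons]; the gradient-times-activation importance measure and the k-means clustering below are stand-ins for the paper's exact definitions, used only to show how important neurons and their activation clusters can drive a coverage measure.

        import numpy as np
        from sklearn.cluster import KMeans

        def important_neurons(activations, gradients, top_k=6):
            # Approximate each neuron's importance by its mean |gradient x activation| contribution.
            contribution = np.mean(np.abs(activations * gradients), axis=0)
            return np.argsort(contribution)[-top_k:]

        def importance_coverage(train_acts, test_acts, neuron_idx, clusters=3):
            # Cluster each important neuron's training activations, then measure how many of the
            # cluster combinations seen during training are also exercised by the test set.
            kms = [KMeans(n_clusters=clusters, n_init=10).fit(train_acts[:, [i]]) for i in neuron_idx]

            def combos(acts):
                labels = [km.predict(acts[:, [i]]) for i, km in zip(neuron_idx, kms)]
                return {tuple(int(l[j]) for l in labels) for j in range(len(acts))}

            train_combos = combos(train_acts)
            return len(combos(test_acts) & train_combos) / max(1, len(train_combos))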

    UNDERSEA: An Exemplar for Engineering Self-Adaptive Unmanned Underwater Vehicles

    Recent advances in embedded systems and underwater communications have raised the autonomy levels of unmanned underwater vehicles (UUVs) from human-driven and scripted to adaptive and self-managing. UUVs can execute longer and more challenging missions, and include functionality that enables adaptation to unexpected oceanic or vehicle changes. The simulated UUV exemplar UNDERSEA introduced in our paper therefore facilitates the development, evaluation and comparison of self-adaptation solutions in a new and important application domain. UNDERSEA comes with predefined oceanic surveillance UUV missions, adaptation scenarios, and a reference controller implementation, all of which can easily be extended or replaced.

    Self-Adaptive Software with Decentralised Control Loops

    We present DECIDE, a rigorous approach to decentralising the control loops of distributed self-adaptive software used in mission-critical applications. DECIDE uses quantitative verification at runtime, first to agree on individual component contributions to meeting system-level quality-of-service requirements, and then to ensure that components achieve their agreed contributions in the presence of changes and failures. All verification operations are carried out locally, using component-level models, and communication between components is infrequent. We illustrate the application of DECIDE and show its effectiveness using a case study from the unmanned underwater vehicle domain.
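
    The split between local verification and infrequent communication can be illustrated with a deliberately simplified sketch. The Component class, the capacity-based local check and the proportional allocation rule are hypothetical and only indicate where local verification happens and when components need to talk to their peers; DECIDE itself analyses component-level models rather than plain capacity numbers.

        class Component:
            def __init__(self, name, capacity):
                self.name = name
                self.capacity = capacity              # locally verified capability, e.g. tasks per hour
                self.agreed_contribution = 0.0

            def local_verification(self):
                # Stand-in for component-level quantitative verification against the local model.
                return self.capacity >= self.agreed_contribution

        def allocate(components, system_requirement):
            # Infrequent, system-wide step: split the requirement in proportion to verified capacity.
            total = sum(c.capacity for c in components)
            for c in components:
                c.agreed_contribution = system_requirement * c.capacity / total

        def control_loop_step(component, components, system_requirement):
            # Runs locally on each component; peers are contacted only when the local check fails,
            # e.g. after a failure has reduced this component's capacity.
            if not component.local_verification():
                allocate(components, system_requirement)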

    Probabilistic Program Performance Analysis

    We introduce a tool-supported method for the formal analysis of timing, resource use, cost and other quality aspects of computer programs. The new method synthesises a Markov-chain model of the analysed code, computes this quantitative model’s transition probabilities using information from program logs, and employs probabilistic model checking to evaluate the performance properties of interest. Unlike existing solutions, our method can reuse the probabilistic model to accurately predict how the program performance would change if the code ran on a different hardware platform, used a new function library, or had a different usage profile. We show the effectiveness of our method by using it to analyse the performance of Java code from the Apache Commons Math library, the Android messaging app Telegram, and an implementation of the knapsack algorithm.
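
    The log-driven estimation step can be sketched as follows. The region names and the logged executions are made up for illustration; in the actual method the states correspond to fragments of the analysed code, and the resulting probabilities parameterise the Markov-chain model that is then analysed with a probabilistic model checker.

        from collections import Counter, defaultdict

        def transition_probabilities(runs):
            # Count observed transitions between code regions across all logged runs, then
            # normalise per source region to obtain the Markov-chain transition probabilities.
            counts = defaultdict(Counter)
            for run in runs:
                for src, dst in zip(run, run[1:]):
                    counts[src][dst] += 1
            return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                    for src, dsts in counts.items()}

        # Hypothetical logs: each entry is the sequence of code regions visited by one execution.
        logs = [
            ["init", "loop", "loop", "io", "done"],
            ["init", "loop", "io", "done"],
            ["init", "loop", "loop", "loop", "done"],
        ]
        print(transition_probabilities(logs)["loop"])
        # {'loop': 0.5, 'io': 0.333..., 'done': 0.166...}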

    Towards Memory-Efficient Validation of Large XMI Models

    Model validation is a common activity in model-driven engineering, where a model is checked against a set of consistency rules (also referred to as constraints) to assess whether it has desirable properties beyond those that can be expressed by the metamodel it conforms to (e.g. to check that all states in a state machine are reachable or that no classes in an object-oriented model are involved in circular inheritance relationships). Such constraints can be written in general-purpose languages (e.g. Java) or in task-specific validation languages such as the Object Constraint Language (OCL) or the Epsilon Validation Language (EVL). To check a model that is serialised in the OMG-standard XMI format against a set of constraints, the current state of practice requires loading the entire model into memory first. This can be problematic in cases where loading the model requires more memory (heap space) than is available in the host machine, and is sub-optimal when carrying out distributed model validation over a number of machines. In this paper, we present an approach that uses static analysis to split sets of model validation constraints into sub-groups that operate on smaller subsets of the model. Combined with existing XMI partial loading capabilities, the proposed approach makes it possible to check larger XMI-based models on a single machine and to potentially improve efficiency when checking models in a distributed setting.
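
    The grouping idea can be sketched as follows. The constraint names, the sets of metamodel types attributed to them (their "footprints", which the paper obtains via static analysis of the constraints) and the one-pass merging strategy are illustrative assumptions; the sketch only shows the core idea that constraints with overlapping footprints end up in the same group, so each group can be checked against a partial load of just those element types.

        from collections import defaultdict

        # Hypothetical constraints and the metamodel types each one accesses.
        constraints = {
            "AllStatesReachable":      {"StateMachine", "State", "Transition"},
            "NoCircularInheritance":   {"Class"},
            "UniqueAttributeNames":    {"Class", "Attribute"},
            "TransitionsHaveTriggers": {"Transition"},
        }

        def group_by_footprint(constraints):
            groups = defaultdict(list)
            for name, types in constraints.items():
                groups[frozenset(types)].append(name)
            # One-pass merge of groups with overlapping footprints; a full implementation
            # would compute connected components instead.
            merged = []
            for footprint, names in groups.items():
                for entry in merged:
                    if entry["types"] & footprint:
                        entry["types"] |= footprint
                        entry["constraints"] += names
                        break
                else:
                    merged.append({"types": set(footprint), "constraints": list(names)})
            return merged

        for group in group_by_footprint(constraints):
            print(sorted(group["types"]), "->", group["constraints"])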

    Search-Based Synthesis of Probabilistic Models for Quality-of-Service Software Engineering

    The formal verification of finite-state probabilistic models supports the engineering of software with strict quality-of-service (QoS) requirements. However, its use in software design is currently a tedious process of manual multiobjective optimisation. Software designers must build and verify probabilistic models for numerous alternative architectures and instantiations of the system parameters. When successful, they end up with feasible but often suboptimal models. The EvoChecker search-based software engineering approach and tool introduced in our paper employ multiobjective optimisation genetic algorithms to automate this process and considerably improve its outcome. We evaluate EvoChecker for six variants of two software systems from the domains of dynamic power management and foreign exchange trading. These systems are characterised by different types of design parameters and QoS requirements, and their design spaces comprise between 2E+14 and 7.22E+86 relevant alternative designs. Our results provide strong evidence that EvoChecker significantly outperforms the current practice and yields actionable insights for software designers.
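
    A stripped-down version of such a search loop is sketched below. The two-parameter design encoding, the toy verify() objectives and the mutate-the-archive refill strategy are simplifications chosen for brevity; EvoChecker itself uses established multiobjective genetic algorithms and evaluates each candidate design with a probabilistic model checker.

        import random

        def verify(design):
            # Toy objectives standing in for model checking of a candidate design:
            # maximise reliability, minimise cost.
            reliability = 1.0 - 0.4 * abs(design[0] - 0.7) - 0.1 * design[1]
            cost = 2.0 * design[0] + 5.0 * design[1]
            return reliability, cost

        def non_dominated(population):
            def dominates(p, q):
                (rp, cp), (rq, cq) = verify(p), verify(q)
                return rp >= rq and cp <= cq and (rp > rq or cp < cq)
            return [p for p in population
                    if not any(dominates(q, p) for q in population if q is not p)]

        def search(pop_size=30, generations=100):
            population = [[random.random(), random.random()] for _ in range(pop_size)]
            for _ in range(generations):
                archive = non_dominated(population)
                # Refill the population by mutating members of the current non-dominated set.
                population = archive + [
                    [min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in random.choice(archive)]
                    for _ in range(pop_size - len(archive))
                ]
            return non_dominated(population)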